A quantitative probabilistic investigation into the accumulation of rounding errors in numerical ODE solution.
We examine numerical rounding errors of some deterministic solvers for systems of ordinary differential equations (ODEs) from a probabilistic viewpoint. We show that the accumulation of rounding errors results in a solution which is inherently random, and we obtain the theoretical distribution of the trajectory as a function of time, the step size and the numerical precision of the computer. We consider, in particular, systems which amplify the effect of the rounding errors so that over long time periods the solutions exhibit divergent behaviour. By performing multiple repetitions with different values of the time step size, we observe numerically the random distributions predicted theoretically. We mainly focus on the explicit Euler and fourth-order Runge-Kutta methods but also briefly consider more complex algorithms, such as the implicit solvers VODE and RADAU5, in order to demonstrate that the observed effects are not specific to a particular method.
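A minimal sketch of the kind of effect studied here (not the paper's own code or test problem): the same explicit Euler integration of dy/dt = y is carried out in single and double precision. Both runs share essentially the same truncation error for a given step size, so the gap between them reflects the rounding error accumulated by the single-precision run.

```python
# Hedged illustration only: explicit Euler in two floating-point precisions.
import numpy as np

def euler(y0, t_end, h, dtype):
    """Explicit Euler for dy/dt = y, with every operation rounded to `dtype`."""
    y, h = dtype(y0), dtype(h)
    for _ in range(int(round(t_end / float(h)))):
        y = y + h * y
    return float(y)

t_end, h = 20.0, 1e-4
y32 = euler(1.0, t_end, h, np.float32)
y64 = euler(1.0, t_end, h, np.float64)

# The relative gap is orders of magnitude larger than a single float32 rounding
# (about 1e-7), showing how rounding effects accumulate along the trajectory.
print(f"float64 result: {y64:.6e}")
print(f"float32 result: {y32:.6e}")
print(f"relative difference: {abs(y32 - y64) / y64:.2e}")
```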
OntoKin: An Ontology for Chemical Kinetic Reaction Mechanisms.
An ontology for capturing both the data and the semantics of chemical kinetic reaction mechanisms has been developed. Such mechanisms can be applied to simulate and understand the behavior of chemical processes, for example, the emission of pollutants from internal combustion engines. An ontology development methodology was used to produce the semantic model of the mechanisms, and a tool was developed to automate the assertion process. As part of the development methodology, the ontology is formally represented using the Web Ontology Language (OWL), assessed by domain experts, and validated by applying a reasoning tool. The resulting ontology, termed OntoKin, has been used to represent example mechanisms from the literature. OntoKin and its instantiations are integrated to create a knowledge base (KB), which is deployed using the RDF4J triple store. The use of the OntoKin ontology and the KB is demonstrated for three use cases: querying across mechanisms, modeling atmospheric pollution dispersion, and serving as a mechanism browser tool. As part of the query use case, the OntoKin tools have been applied by a chemist to identify variations in the rate of a prompt NOx formation reaction in the combustion of ammonia, as represented by four mechanisms in the literature.
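The "querying across mechanisms" use case can be pictured with the small sketch below (requires rdflib). The graph, namespace, and predicate names are invented for illustration and are not the actual OntoKin vocabulary or knowledge base.

```python
# Hypothetical sketch: a tiny in-memory RDF graph queried with SPARQL to compare
# a rate parameter for the same reaction as asserted by two mechanisms.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/kin#")   # placeholder namespace, not OntoKin
g = Graph()

for mech, a_factor in [("MechA", 1.2e13), ("MechB", 3.4e13)]:
    rxn = EX[f"{mech}_NH3_oxidation"]
    g.add((rxn, RDF.type, EX.Reaction))
    g.add((rxn, EX.belongsToMechanism, EX[mech]))
    g.add((rxn, EX.hasPreExponentialFactor, Literal(a_factor)))

query = """
SELECT ?mech ?a WHERE {
    ?rxn a ex:Reaction ;
         ex:belongsToMechanism ?mech ;
         ex:hasPreExponentialFactor ?a .
}"""
for mech, a in g.query(query, initNs={"ex": EX}):
    print(mech, a.toPython())
```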
An Ontology and Semantic Web Service for Quantum Chemistry Calculations.
The purpose of this article is to present an ontology, termed OntoCompChem, for quantum chemistry calculations as performed by the Gaussian quantum chemistry software, as well as a semantic web service named MolHub. The OntoCompChem ontology has been developed based on the semantics of concepts specified in the CompChem convention of Chemical Markup Language (CML) and by extending the Gainesville Core (GNVC) ontology. MolHub is developed in order to establish semantic interoperability between different tools used in quantum chemistry and thermochemistry calculations, and as such is integrated into the J-Park Simulator (JPS), a multidomain interactive simulation platform and expert system. It uses the OntoCompChem ontology and implements a formal language based on propositional logic as a part of its query engine, which verifies satisfiability through reasoning. This paper also presents a NASA polynomial use-case scenario to demonstrate semantic interoperability between Gaussian and a tool for thermodynamic data calculations within MolHub. This project is supported by the National Research Foundation (NRF), Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme, and by the Alexander von Humboldt Foundation.
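The satisfiability check mentioned above can be illustrated with a toy propositional example. The formula, variable names, and encoding below are hypothetical and are not taken from MolHub or its query language.

```python
# Hedged sketch: brute-force propositional satisfiability check of a toy query.
from itertools import product

def is_satisfiable(formula, variables):
    """Try every True/False assignment; return the first satisfying one, if any."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return True, assignment
    return False, None

def formula(a):
    # Toy query: a Gaussian calculation exists AND thermo data is present or derivable.
    return a["has_gaussian_calc"] and (a["has_thermo_data"] or a["can_derive_thermo"])

variables = ["has_gaussian_calc", "has_thermo_data", "can_derive_thermo"]
sat, model = is_satisfiable(formula, variables)
print("satisfiable:", sat, "example assignment:", model)
```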
Improved methodology for performing the inverse Abel transform of flame images for color ratio pyrometry.
A new method is presented for performing the Abel inversion by fitting the line-of-sight projection of a predefined intensity distribution (FLiPPID) to the recorded 2D projections. The aim is to develop a methodology that is less prone to experimental noise when analyzing the projection of axisymmetric objects, in this case co-flow diffusion flame images for color ratio pyrometry. A regression model is chosen for the light emission intensity distribution of the flame cross section as a function of radial distance from the flame center line. The forward Abel transform of this model function is fitted to the projected light intensity recorded by a color camera. For each of the three color channels, the model function requires three fitting parameters to match the radial intensity profile at each height above the burner. This results in a very smooth Abel inversion with no artifacts such as oscillations or negative values of the light source intensity, as are commonly observed for alternative Abel inversion techniques such as basis-set expansion or onion peeling. The advantages of the new FLiPPID method are illustrated by calculating the soot temperature and volume fraction profiles inside a co-flow diffusion flame, both being significantly smoother than those produced by the alternative inversion methods. The developed FLiPPID methodology can be applied to numerous other optical techniques for which smooth inverse Abel transforms are required.
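A minimal sketch of the FLiPPID idea follows. It assumes a Gaussian radial profile purely because its forward Abel transform has a closed form; the published model function (three fitting parameters per color channel) is not reproduced here.

```python
# Hedged sketch: fit the forward Abel transform of a smooth radial model directly
# to noisy line-of-sight data, instead of numerically inverting the data.
import numpy as np
from scipy.optimize import curve_fit

def projection_model(y, amplitude, width):
    """Forward Abel transform of I(r) = amplitude * exp(-(r/width)**2)."""
    return amplitude * width * np.sqrt(np.pi) * np.exp(-(y / width) ** 2)

# Synthetic "camera" data: true projection plus additive noise.
rng = np.random.default_rng(0)
y = np.linspace(-3.0, 3.0, 201)
data = projection_model(y, amplitude=1.0, width=0.8) + rng.normal(0.0, 0.02, y.size)

# Fitting the projection avoids differentiating noisy data, so the recovered
# radial profile I(r) is smooth and non-negative by construction.
popt, _ = curve_fit(projection_model, y, data, p0=[0.5, 1.0])
amplitude_fit, width_fit = popt
r = np.linspace(0.0, 3.0, 100)
radial_profile = amplitude_fit * np.exp(-(r / width_fit) ** 2)
print(f"fitted amplitude={amplitude_fit:.3f}, width={width_fit:.3f}")
```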
Predicting Power Conversion Efficiency of Organic Photovoltaics: Models and Data Analysis.
Funders: Cambridge Trust; National Research Foundation Singapore; Alexander von Humboldt-Stiftung; China Scholarship Council.
In this paper, the ability of three selected neural machine learning models and three baseline models in predicting the power conversion efficiency (PCE) of organic photovoltaics (OPVs) using molecular structure information as an input is assessed. The bidirectional long short-term memory (gFSI/BiLSTM), attentive fingerprints (attentive FP), and simple graph neural networks (simple GNN), as well as the baseline support vector regression (SVR), random forests (RF), and high-dimensional model representation (HDMR) methods, are trained on both the large, computational Harvard Clean Energy Project Database (CEPDB) and the much smaller experimental Harvard Organic Photovoltaic 15 dataset (HOPV15). It was found that the neural models generally performed better on the computational dataset, with the attentive FP model reaching state-of-the-art performance with a test-set mean squared error of 0.071. The experimental dataset proved much harder to fit, with all of the models exhibiting rather poor performance. Contrary to the computational dataset, the baseline models were found to perform better than the neural models. To improve the ability of machine learning models to predict PCEs for OPVs, either better computational results that correlate well with experiments or more experimental data obtained under well-controlled conditions are likely required.
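To make the baseline comparison concrete, here is a small, self-contained sketch using synthetic fingerprint-like features and a synthetic target rather than CEPDB or HOPV15 data; only the choice of SVR/RF baselines and the mean-squared-error metric mirror the study described above.

```python
# Hedged sketch with synthetic data: train two baseline regressors and compare
# their held-out mean squared error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 128)).astype(float)   # stand-in binary fingerprints
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.5, 500)     # stand-in "PCE" target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for name, model in [("SVR", SVR(C=10.0)), ("RF", RandomForestRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.3f}")
```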
Comment on "A spherical cavity model for quadrupolar dielectrics" [J. Chem. Phys. 144, 114502 (2016)]
The dielectric properties of a fluid composed of molecules possessing both dipole and quadrupole moments are studied based on a model of the Onsager type (molecule in the centre of a spherical cavity). The dielectric permittivity ε and the macroscopic quadrupole polarizability αQ of the fluid are related to the basic molecular characteristics (molecular dipole, polarizability, quadrupole, quadrupolarizability). The effect of αQ is to increase the reaction field, to bring forth a reaction field gradient, to decrease the cavity field and to bring forth a cavity field gradient. The effects from the quadrupole terms are significant in the case of small cavity size in a non-polar liquid. The quadrupoles in the medium are shown to have a small but measurable effect on the dielectric permittivity of several liquids (Ar, Kr, Xe, CH4, N2, CO2, CS2, C6H6, H2O, CH3OH). The theory is used to calculate the macroscopic quadrupolarizabilities of these fluids as functions of pressure and temperature. The cavity radii are also determined for these liquids, and it is shown that they are functions of density only. This extension of Onsager's theory will be important for non-polar solutions (fuel, crude oil, liquid CO2), especially at increased pressures.
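For orientation, a reminder of the standard dipolar Onsager relation is sketched below (with ε the static permittivity, ε∞ the high-frequency permittivity, N the number density, μ the molecular dipole moment); the quadrupolar generalization derived in the work above, which additionally involves the quadrupole moment and quadrupolarizability, is not reproduced here.

```latex
% Classic Onsager relation for a purely dipolar fluid (SI units); the model above
% extends this picture with quadrupole and quadrupolarizability terms.
\frac{(\varepsilon - \varepsilon_\infty)\,(2\varepsilon + \varepsilon_\infty)}
     {\varepsilon\,(\varepsilon_\infty + 2)^2}
  = \frac{N \mu^2}{9\,\varepsilon_0 k_{\mathrm B} T}
```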
Modelling TiO2 formation in a stagnation flame using method of moments with interpolative closure
The stagnation flame synthesis of titanium dioxide nanoparticles from titanium tetraisopropoxide (TTIP) is modelled based on a simple one-step decomposition mechanism and one-dimensional stagnation flow. The particle model, which accounts for nucleation, surface growth, and coagulation, is fully coupled to the flow and the gas-phase chemistry and solved using the method of moments with interpolative closure (MoMIC). The model assumes no formation of aggregates, considering the high temperature of the flame. In order to account for the free-jet region in the flow, the computational distance, H = 1.27 cm, is chosen based on the observed flame location in the experiment (for a nozzle-stagnation distance, L = 3.4 cm). The model shows good agreement with the experimentally measured mobility particle size for a stationary stagnation surface with varying TTIP loading, although the particle geometric standard deviation (GSD) is underpredicted for high TTIP loading. The particle size is predicted to be sensitive to the sampling location near the stagnation surface in the modelled flame. The sensitivity to the sampling location is found to increase with increasing precursor loading and stagnation temperature. Lastly, the effect of surface growth is evaluated by comparing the result with an alternative reaction model. It is found that surface growth plays an important role in the initial stage of particle growth and, if neglected, results in severe underprediction of particle size and overprediction of particle GSD. NRF (National Research Foundation, Singapore). Accepted version.
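The interpolative-closure idea can be sketched with a toy zero-dimensional moment model: integer moments M0, M1, M2 evolve under nucleation, constant-kernel coagulation, and surface growth, and the fractional moments needed by the growth terms are closed by logarithmic interpolation. The rates and kernel below are placeholders and do not reproduce the fully coupled stagnation-flow model above.

```python
# Hedged toy illustration of MoMIC-style closure, not the paper's flame model.
import numpy as np
from scipy.integrate import solve_ivp

J, V0 = 1.0e4, 1.0      # nucleation rate and nucleus volume (arbitrary units)
BETA, G = 1.0e-3, 5.0   # constant coagulation kernel and surface-growth constant

def fractional_moment(order, moments):
    """Interpolative closure: log-interpolate M_order from M0, M1, M2."""
    log_m = np.log(np.maximum(moments, 1e-300))
    k = np.array([0.0, 1.0, 2.0])
    # Lagrange interpolation of log(Mk) at k = 0, 1, 2 evaluated at `order`.
    weights = [np.prod([(order - kj) / (ki - kj) for kj in k if kj != ki]) for ki in k]
    return np.exp(np.dot(weights, log_m))

def rhs(t, m):
    m0, m1, m2 = m
    m23 = fractional_moment(2.0 / 3.0, m)       # ~ total surface area
    m53 = fractional_moment(5.0 / 3.0, m)
    dm0 = J - 0.5 * BETA * m0 ** 2              # nucleation - coagulation
    dm1 = J * V0 + G * m23                      # mass from nucleation + surface growth
    dm2 = J * V0 ** 2 + BETA * m1 ** 2 + 2.0 * G * m53
    return [dm0, dm1, dm2]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, V0, V0 ** 2], method="LSODA")
m0, m1, _ = sol.y[:, -1]
print(f"number density ~ {m0:.3e}, mean particle volume ~ {m1 / m0:.3e}")
```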
Knowledge Engineering in Chemistry: From Expert Systems to Agents of Creation.
Passing knowledge from human to human is a natural process that has continued since the beginning of humankind. Over the past few decades, we have witnessed that knowledge is no longer passed only between humans but also from humans to machines. The latter form of knowledge transfer represents a cornerstone in artificial intelligence (AI) and lays the foundation for knowledge engineering (KE). In order to pass knowledge to machines, humans need to structure, formalize, and make knowledge machine-readable. Subsequently, humans also need to develop software that emulates their decision-making process. In order to engineer chemical knowledge, chemists are often required to challenge their understanding of chemistry and thinking processes, which may help improve the structure of chemical knowledge. Knowledge engineering in chemistry dates from the development of expert systems that emulated the thinking process of analytical and organic chemists. Since then, many different expert systems employing rather limited knowledge bases have been developed, solving problems in retrosynthesis, analytical chemistry, chemical risk assessment, etc. However, toward the end of the 20th century, the AI winters slowed down the development of expert systems for chemistry. At the same time, the increasing complexity of chemical research, alongside the limitations of the available computing tools, made it difficult for many chemistry expert systems to keep pace. In the past two decades, the semantic web, the popularization of object-oriented programming, and the increase in computational power have revitalized knowledge engineering. Knowledge formalization through ontologies has become commonplace, triggering the subsequent development of knowledge graphs and cognitive software agents. These tools enable interoperability, the representation of more complex systems, inference capabilities, and the synthesis of new knowledge. This Account introduces the history and core principles of KE and its applications within the broad realm of chemical research and engineering. In this regard, we first discuss how chemical knowledge is formalized and how a chemist's cognition can be emulated with the help of reasoning algorithms. Following this, we discuss various applications of knowledge graph and agent technology used to solve problems in chemistry related to molecular engineering, chemical mechanisms, multiscale modeling, automation of calculations and experiments, and chemist-machine interactions. These developments are discussed in the context of a universal and dynamic knowledge ecosystem, referred to as The World Avatar (TWA). This research was supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. AK and MK thank the Humboldt Foundation (Berlin, Germany) and the Isaac Newton Trust (Cambridge, UK) for a Feodor Lynen Fellowship. JB acknowledges financial support provided by a CSC Cambridge International Scholarship from the Cambridge Trust and the China Scholarship Council. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
Question Answering System for Chemistry.
This paper describes the implementation and evaluation of a proof-of-concept Question Answering (QA) system for accessing chemical data from knowledge graphs (KGs), which offer data ranging from chemical kinetics to the chemical and physical properties of species. We trained question classification and named entity recognition models that specialize in interpreting chemistry questions. The system has a novel design which applies a topic model to identify the question-to-ontology affiliation in order to handle ontologies with different structures. The topic model also helps the system to provide higher-quality answers. Moreover, a new method that automatically generates training questions from ontologies is also implemented. The question set generated for training contains 432,989 questions of 11 types. Such a training set has proven effective for training both the question classification model and the named entity recognition model. We evaluated the system using other KGQA systems as baselines. The system outperforms the chosen KGQA baselines in answering chemistry-related questions. The QA system is also compared to the Google search engine and the WolframAlpha engine. The comparison shows that the QA system can answer certain types of questions better than the search engines.
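The automatic generation of training questions can be pictured with the small template-based sketch below. The schema, templates, and question types are invented for illustration and are not the ones used to build the 432,989-question set.

```python
# Hedged sketch: expand (class, property) pairs from a toy schema through a few
# natural-language templates to produce labelled training questions.
TOY_SCHEMA = {
    "Species": ["molecular weight", "boiling point"],
    "Reaction": ["rate constant", "activation energy"],
}
TEMPLATES = [
    ("what is the {prop} of {instance}", "property_lookup"),
    ("show me the {prop} for {instance}", "property_lookup"),
    ("which {cls} has the highest {prop}", "superlative"),
]

def generate_questions(schema, instances):
    """Yield (question_text, question_type, entity) tuples for model training."""
    for cls, properties in schema.items():
        for prop in properties:
            for instance in instances.get(cls, []):
                for template, q_type in TEMPLATES:
                    text = template.format(prop=prop, instance=instance, cls=cls.lower())
                    yield text, q_type, instance

instances = {"Species": ["methane", "benzene"], "Reaction": ["NH3 + OH"]}
for question in list(generate_questions(TOY_SCHEMA, instances))[:5]:
    print(question)
```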
Marie and BERT: A Knowledge Graph Embedding Based Question Answering System for Chemistry.
This paper presents a novel knowledge graph question answering (KGQA) system for chemistry, implemented on hybrid knowledge graph embeddings and aiming to provide fact-oriented information retrieval for chemistry-related research and industrial applications. Unlike other existing designs, the system operates on multiple embedding spaces, which are built with various embedding methods, and queries these spaces in parallel. With the answers returned from the multiple embedding spaces, the system leverages a score alignment model to adjust the answer scores and rerank the answers. Further, the system implements an algorithm to derive implicit multihop relations in order to handle the complexities of deep ontologies and improve multihop question answering. The system also implements a BERT-based bidirectional entity-linking model to enhance the robustness and accuracy of the entity-linking module, and it uses a joint numerical embedding model to efficiently handle numerical filtering questions. In addition, it can invoke semantic agents to perform dynamic calculations autonomously. Finally, the KGQA system handles numerous chemical reaction mechanisms using semantic parsing supported by a Linked Data Fragment server. This paper evaluates the accuracy of each module within the KGQA system with a chemistry question dataset.
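A conceptual sketch of parallel scoring over multiple embedding spaces with score alignment and reranking is given below; the embeddings, dimensions, and min-max alignment are placeholders, not the actual components of the system described above.

```python
# Hedged sketch: rank candidate answers in two stand-in embedding spaces, rescale
# the raw similarity scores onto a common range, and merge into one reranked list.
import numpy as np

rng = np.random.default_rng(0)
candidates = ["answer_a", "answer_b", "answer_c", "answer_d"]

# Stand-ins for the question and candidate embeddings in two different spaces.
spaces = {
    "space_one": (rng.normal(size=8), rng.normal(size=(4, 8))),
    "space_two": (rng.normal(size=16), rng.normal(size=(4, 16))),
}

def cosine_scores(question_vec, candidate_mat):
    q = question_vec / np.linalg.norm(question_vec)
    c = candidate_mat / np.linalg.norm(candidate_mat, axis=1, keepdims=True)
    return c @ q

def align(scores):
    """Min-max rescaling so scores from different spaces are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

combined = np.zeros(len(candidates))
for name, (q_vec, c_mat) in spaces.items():
    combined += align(cosine_scores(q_vec, c_mat))

for idx in np.argsort(-combined):
    print(f"{candidates[idx]}: {combined[idx]:.3f}")
```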